    Next Generation of Product Search and Discovery

    With the rapid development of e-commerce, online shopping has become an important part of people’s daily lives. In some domains, such as books, electronics, and CDs/DVDs, it has surpassed or even replaced traditional shopping. Compared with traditional retailing, e-commerce is information intensive, and a key factor in e-business success is how easily consumers can discover a product. Conventionally, a product search engine based on keyword search or category browsing is provided to help users find the product information they need. The general goal of such a system is to let users quickly locate information of interest while minimizing their effort in search and navigation, a process in which human factors play a significant role. Finding product information can be a tricky task that requires skilled use of search engines and non-trivial navigation of multilayer categories, and it can be frustrating for many users, especially inexperienced ones. This dissertation develops a new visual product search system that effectively extracts the properties of unstructured products and presents likely items of interest so that users can quickly locate the ones they are most interested in. We designed a feature extraction algorithm that retains product color and local pattern features; experimental evaluation on a benchmark dataset demonstrated that it is robust against common geometric and photometric distortions. Rather than ignoring product text information, we also developed a ranking model learned via a unified probabilistic hypergraph that captures correlations between product visual content and textual content. Moreover, we proposed a fuzzy hierarchical co-clustering algorithm for collaborative filtering product recommendation, which automatically groups users into interest communities based on their behavior so that customized recommendations can be made from these implicitly detected relations. In summary, the developed system performs substantially better on visual search over unstructured products than state-of-the-art approaches, and with the ranking scheme and collaborative filtering recommendation module, the user’s overhead in locating valuable information is reduced and the experience of seeking useful product information is improved.
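
    The recommendation component described above groups users into overlapping interest communities and recommends within them. The sketch below is a deliberately simplified illustration of that idea: it assumes a flat fuzzy c-means grouping over a user-item interaction matrix rather than the dissertation's fuzzy hierarchical co-clustering, and every name, parameter, and the toy data are illustrative assumptions.

        # Illustrative sketch only: flat fuzzy c-means over a user-item interaction
        # matrix stands in for the dissertation's fuzzy hierarchical co-clustering.
        # All names, parameters, and the toy data are assumptions.
        import numpy as np

        def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
            """Soft-cluster the rows of X into c interest communities; returns memberships U."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)            # each user's memberships sum to 1
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
                U = 1.0 / dist ** (2.0 / (m - 1.0))      # standard fuzzy c-means update
                U /= U.sum(axis=1, keepdims=True)
            return U

        def recommend(X, U, user, top_k=3):
            """Score items by popularity inside the communities the user belongs to."""
            community_scores = U.T @ X                   # item popularity per community
            scores = U[user] @ community_scores          # weight by the user's memberships
            scores[X[user] > 0] = -np.inf                # do not re-recommend seen items
            return np.argsort(scores)[::-1][:top_k]

        # toy behaviour matrix: rows are users, columns are products (1 = interacted)
        X = np.array([[1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0],
                      [0, 0, 1, 1, 1],
                      [0, 0, 0, 1, 1]], dtype=float)
        U = fuzzy_cmeans(X)
        print(recommend(X, U, user=0))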

    Learn to Rank Images: A Unified Probabilistic Hypergraph Model for Visual Search

    In visual search systems, an important issue is how to leverage rich contextual information in a visual computational model to build more robust systems and better satisfy the user’s needs and intentions. In this paper, we introduce a ranking model that captures the complex relations between product visual and textual information in visual search. We use graph-based paradigms to model the relations among product images, product category labels, and product names and descriptions, and we develop a unified probabilistic hypergraph ranking algorithm that, by modeling the correlations between product visual and textual features, substantially enriches the description of each image. We evaluate the proposed algorithm on a dataset collected from a real e-commerce website, and the comparison shows that it substantially improves retrieval performance over visual-distance-based ranking.
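
    A minimal sketch of the general idea, assuming the standard hypergraph random-walk ranking formulation rather than the paper's exact unified probabilistic hypergraph: vertices are product images, hyperedges come from visual nearest-neighbour groups and shared textual labels, and a seed relevance vector is propagated over the hypergraph. The function names, the neighbourhood size k, and the damping factor alpha are illustrative assumptions.

        # Illustrative only: a standard hypergraph random-walk ranking, not the paper's
        # exact unified probabilistic hypergraph. Hyperedges come from visual k-NN groups
        # and shared textual labels; k, alpha, and all names are assumptions.
        import numpy as np

        def build_incidence(visual_feats, text_labels, k=3):
            """One hyperedge per image (its visual k-NN set) plus one per shared label."""
            n = len(visual_feats)
            dists = np.linalg.norm(visual_feats[:, None] - visual_feats[None], axis=2)
            edges = [set(np.argsort(dists[i])[:k]) for i in range(n)]    # visual hyperedges
            for lab in set(text_labels):                                 # textual hyperedges
                edges.append({i for i, l in enumerate(text_labels) if l == lab})
            H = np.zeros((n, len(edges)))
            for j, e in enumerate(edges):
                H[list(e), j] = 1.0
            return H

        def hypergraph_rank(H, y, alpha=0.9, n_iter=50):
            """Propagate the seed relevance vector y over the hypergraph H (|V| x |E|)."""
            w = np.ones(H.shape[1])                      # uniform hyperedge weights
            d_v = H @ w                                  # vertex degrees
            d_e = H.sum(axis=0)                          # hyperedge degrees
            Dv = np.diag(1.0 / np.sqrt(d_v + 1e-9))
            theta = Dv @ H @ np.diag(w / (d_e + 1e-9)) @ H.T @ Dv
            f = y.copy()
            for _ in range(n_iter):                      # f <- alpha*Theta*f + (1-alpha)*y
                f = alpha * (theta @ f) + (1 - alpha) * y
            return f                                     # higher score = higher rank

        # y can be seeded from visual similarity to the query image, e.g.
        # y = np.exp(-np.linalg.norm(visual_feats - query_feat, axis=1)),
        # and the final ranking is np.argsort(-hypergraph_rank(H, y)).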

    A Color Boosted Local Feature Extraction Method for Mobile Product Search

    Abstract — Mobile visual search is a popular and promising research area for product search and image retrieval. We present a novel color boosted local feature extraction method based on the SIFT descriptor that not only maintains robustness and repeatability under imaging condition variations, but also retains the salient color and local patterns of apparel products. Experiments demonstrate the effectiveness of our approach and show that the proposed method outperforms existing methods on all tested retrieval rates. Index Terms — SIFT, color SIFT, feature extraction, mobile product search
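
    A hedged sketch of one common way to add colour to SIFT (computing descriptors over opponent-colour channels and concatenating them); the paper's specific colour-boosting weights are not reproduced here, and the OpenCV-based code below is an illustration under stated assumptions, not the authors' implementation. It assumes an OpenCV build where cv2.SIFT_create is available (OpenCV 4.4 or later).

        # Illustrative only: SIFT descriptors computed over opponent-colour channels and
        # concatenated. The paper's specific colour boosting is not reproduced here.
        import cv2
        import numpy as np

        def opponent_channels(bgr):
            """Convert a BGR image into the three opponent-colour channels O1, O2, O3."""
            b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
            return [(r - g) / np.sqrt(2),
                    (r + g - 2 * b) / np.sqrt(6),
                    (r + g + b) / np.sqrt(3)]

        def color_sift(bgr):
            """Detect keypoints on the grayscale image, then describe each keypoint on
            every opponent channel and concatenate the three 128-dim descriptors."""
            sift = cv2.SIFT_create()
            keypoints = sift.detect(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), None)
            descriptors = []
            for ch in opponent_channels(bgr):
                ch8 = cv2.normalize(ch, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
                _, d = sift.compute(ch8, keypoints)
                descriptors.append(d)
            return keypoints, np.hstack(descriptors)     # one 384-dim descriptor per keypoint

        # usage: kps, desc = color_sift(cv2.imread("product.jpg"))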

    Robotics and Deep Learning Framework for Structural Health Monitoring of Utility Pipes

    A critical modern-day challenge for utility operators is condition monitoring of underground sewer infrastructure. The existing industry standard for underground sewer line inspection is to send a wire-guided robot with a closed-circuit television (CCTV) camera through a pipe. A trained operator observes the video feed from the camera and annotates it to record defects such as cracks, sags, offsets, root infiltrations, grease build-up, and lateral protrusions. The success of a CCTV-based robot system therefore depends on the visual observation and alertness of the operator, and operator fatigue and distraction can lead to missed observations; CCTV-based systems are also expensive and man-hour intensive. We propose a deep learning based method that automates defect detection without requiring an onsite operator to visually observe the video. The system passes an autonomous camera-mounted robot through the pipe, and the recorded video is analyzed with deep learning algorithms. Our initial focus is detecting cracks in polyvinyl chloride pipes, which are the industry standard for sewer installations. We propose a deep learning framework, including a network architecture, to detect the presence or absence of a crack in a pipe sample, and we collect empirical data with an autonomous robot during laboratory trials to validate our approach. The data analysis indicates an accuracy of 89.42% in training and 83.3% in validation. Further data collection and analysis are in progress, and results will be reported in future work. © 2019 IEEE
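
    The abstract above does not spell out the network, so the following is only an illustrative sketch of a small binary crack / no-crack CNN classifier in PyTorch; the input resolution, layer sizes, and training hyperparameters are assumptions, not the architecture reported in the paper.

        # Illustrative only: a small binary crack / no-crack CNN in PyTorch. Input
        # resolution, layer sizes, and hyperparameters are assumptions.
        import torch
        import torch.nn as nn

        class CrackNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 28 * 28, 64), nn.ReLU(),
                    nn.Linear(64, 1),                    # single logit: crack vs. no crack
                )

            def forward(self, x):                        # x: (batch, 3, 224, 224) frames
                return self.classifier(self.features(x))

        model = CrackNet()
        criterion = nn.BCEWithLogitsLoss()               # binary cross-entropy on the logit
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        frames = torch.randn(8, 3, 224, 224)             # dummy batch standing in for video frames
        labels = torch.randint(0, 2, (8, 1)).float()
        loss = criterion(model(frames), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()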